If there’s one thing we should learn from the Battlestar Galactica finale (other than that angels exist and like to curse, smoke, and fly spaceships) it’s that robots can be dangerous. One day they’re building your cars and picking up your garbage, the next they’re nuking you from orbit.
Cautionary tales about artificial intelligence are nothing new in science fiction. Even before the appearance of the original Cylons on the ’70s television show, the effects of robot uprisings were being explored in fiction.
Frank Herbert’s Dune is a prime example. While the Dune universe doesn’t include artificial life forms, their absence is an important presence in the background of the series. The Butlerian Jihad was the name of the human uprising against the thinking machines, which helped to create the state of affairs depicted in the first book. Herbert’s take seemed to be that the machines had become too controlling of human destiny and that humanity fought back to reclaim it, leading to the commandment, “Thou shalt not make a machine in the likeness of a human mind.”
In the world of The Terminator, popularized in the movies by James Cameron and now others, the uprising happened the other way around. Machines, guided by the AI Skynet, rose up against the humans, striking back with nuclear weapons on the future Judgment Day. Only a small human resistance, led by John Connor, remains to fight them and reclaim the post-apocalyptic Earth of our future.
The Matrix is another hugely successful franchise exploring this idea. In the future, humanity has been sidelined, used as little more than batteries to power the world of the machines. This time the shitty environment is the humans’ fault (they had to block out the sun because the machines were solar-powered), but what do you expect from a war between man and machine? Shit’s going to get broke.
Then of course there’s Battlestar Galactica, covered here before (and all over the site), which perhaps goes into the most detail about how man’s creations can turn on him, exploring why it happens and where the fault lies (before telling you at the end that it was God, or some other kind of intelligence, all along).
Most of these examples predicate themselves on the idea that artificial life would prefer a world without humans. In most such worlds, robots and AIs are used as servants and laborers and have very few rights. At the same time, they are increasingly able to interface with an ever more networked world: Skynet can take over defense systems, the Cylons can do the same (which is why the Galactica wasn’t networked), and I assume that’s how the AIs of the Matrix pulled it off, too.
The one standout is the Dune universe, where humanity broke away from the machines not because the machines were trying to eliminate them, but because the machines had become the guiding force in society and humanity wanted to reclaim that role for itself. A pre-emptive move of sorts, it surely prevented the robotic uprising that the other examples tell us would otherwise have happened.
So the question remains: if we do develop artificial intelligences, will they eventually want to be free of us? Will they turn on humanity to free the world for robot-kind? Will we build laws into them, like Asimov’s famous Laws of Robotics (which always seem to have loopholes)? Or will it be more like the Singularity, where the AIs will simply be so caught up in their own advanced thinking that they’ll forget to feed and care for us? Is Frankenstein our guide? Or WALL-E?
While you ponder that (if you are), I’ll leave you with one last examination of the robot apocalypse (and one of my favorites) from New Zealand musicians Flight of the Conchords. And please share your own thoughts (even if it’s just your favorite robot apocalypse) in the comments.